Event-driven contrastive divergence: neural sampling foundations
Abstract
In a recent Frontiers in Neuroscience paper (Neftci et al., 2014) we contributed an on-line learning rule, driven by spike events in an Integrate-and-Fire (IF) neural network, that emulates the learning performance of Contrastive Divergence (CD) in an equivalent Restricted Boltzmann Machine (RBM), amenable to real-time implementation in spike-based neuromorphic systems. The event-driven CD framework...
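For reference, the update that the event-driven rule emulates can be sketched as follows in NumPy. This is the standard CD-1 step for a binary RBM, not the spiking implementation itself, and the variable names (W for weights, b and c for visible and hidden biases) are illustrative.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_update(v0, W, b, c, lr=0.01, rng=np.random.default_rng(0)):
        # Positive phase: hidden activations driven by the data vector v0.
        ph0 = sigmoid(v0 @ W + c)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        # Negative phase: one Gibbs step back to a reconstruction of the visibles.
        pv1 = sigmoid(h0 @ W.T + b)
        v1 = (rng.random(pv1.shape) < pv1).astype(float)
        ph1 = sigmoid(v1 @ W + c)
        # CD-1 estimates of the positive and negative gradient terms.
        W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
        b += lr * (v0 - v1)
        c += lr * (ph0 - ph1)
        return W, b, c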
Similar articles

Event-driven contrastive divergence for spiking neuromorphic systems
Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation...
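A rough intuition for the spiking emulation, sketched below under strong simplifying assumptions (independent Bernoulli spike events per time bin standing in for IF neuron dynamics; the function name and parameters are illustrative, not the paper's implementation): each hidden unit fires stochastically at a rate set by the sigmoid of its summed input, so spike counts accumulated over a time window approximate the Gibbs sampling statistics that CD needs.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def hidden_rates_from_spikes(v, W, c, T=200, rng=np.random.default_rng(1)):
        # Target per-bin firing probability of each hidden unit given visibles v.
        p = sigmoid(v @ W + c)
        # Event raster: T time bins x number of hidden units (Bernoulli spikes).
        spikes = rng.random((T, p.size)) < p
        # The empirical firing rate converges to p as the window T grows.
        return spikes.sum(axis=0) / T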
Weighted Contrastive Divergence
Learning algorithms for energy-based Boltzmann architectures that rely on gradient descent are in general computationally prohibitive, typically due to the exponential number of terms involved in computing the partition function. One therefore has to resort to approximation schemes for the evaluation of the gradient. This is the case for Restricted Boltzmann Machines (RBMs) and their learning alg...
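For context, the gradient that these approximation schemes target can be written in the standard energy-based form (general notation, not the weighted variant's specific estimator):

    \frac{\partial \log p(v)}{\partial \theta}
      = -\,\mathbb{E}_{p(h \mid v)}\!\left[\frac{\partial E(v,h)}{\partial \theta}\right]
        + \mathbb{E}_{p(v,h)}\!\left[\frac{\partial E(v,h)}{\partial \theta}\right],
    \qquad
    Z = \sum_{v,h} e^{-E(v,h)}

The second expectation is taken under the model distribution, whose normalizer Z sums over exponentially many configurations; CD replaces it with samples obtained from a short Gibbs chain started at the data.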
Differential Contrastive Divergence
We formulate a differential version of contrastive divergence for continuous configuration spaces by considering a limit of MCMC processes in which the proposal distribution becomes infinitesimal. This leads to a deterministic differential contrastive divergence update — one in which no stochastic sampling is required. We prove convergence of differential contrastive divergence in general and p...
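As background (this is the standard formulation from Hinton's original contrastive divergence, not the differential construction itself), CD-k minimizes the difference of two KL divergences, where p_0 is the data distribution, p_k the distribution after k steps of the sampling operator, and p_\infty the model's equilibrium distribution; the differential variant described above studies this object as the proposal step of the chain shrinks toward zero.

    \mathrm{CD}_k \;=\; \mathrm{KL}\!\left(p_0 \,\|\, p_\infty\right) \;-\; \mathrm{KL}\!\left(p_k \,\|\, p_\infty\right)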
Wormholes Improve Contrastive Divergence
In models that define probabilities via energies, maximum likelihood learning typically involves using Markov Chain Monte Carlo to sample from the model’s distribution. If the Markov chain is started at the data distribution, learning often works well even if the chain is only run for a few time steps [3]. But if the data distribution contains modes separated by regions of very low density, bri...
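A toy sketch of the mixing problem and of why long-range moves help: an illustrative Metropolis sampler on a bimodal 1-D energy, not the paper's wormhole construction, with all names and constants invented for the example. Occasional symmetric jumps of a fixed length let the chain hop directly between modes that local proposals almost never cross.

    import numpy as np

    def energy(x):
        # Two modes near -3 and +3 separated by a low-density region.
        return -np.log(np.exp(-0.5 * (x + 3.0) ** 2) + np.exp(-0.5 * (x - 3.0) ** 2))

    def mh_chain(n_steps=10000, p_jump=0.1, jump=6.0, rng=np.random.default_rng(0)):
        x, samples = -3.0, []
        for _ in range(n_steps):
            # Mostly small local proposals; occasionally a long, symmetric jump
            # that can move directly between the two modes.
            step = rng.choice([-jump, jump]) if rng.random() < p_jump else rng.normal(0.0, 0.5)
            x_new = x + step
            # Metropolis acceptance for the target density proportional to exp(-energy).
            if rng.random() < np.exp(energy(x) - energy(x_new)):
                x = x_new
            samples.append(x)
        return np.array(samples)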
Journal
Journal title: Frontiers in Neuroscience
Year: 2015
ISSN: 1662-453X
DOI: 10.3389/fnins.2015.00104